6 research outputs found
CellProfiler plugins -- an easy image analysis platform integration for containers and Python tools
CellProfiler is widely used software for creating reproducible, reusable
image analysis workflows without needing to code. In addition to the >90
modules that make up the main CellProfiler program, CellProfiler has a plugins
system that allows the creation of new modules which integrate with other
Python tools or with tools packaged in software containers. The
CellProfiler-plugins repository contains a number of these CellProfiler
modules, especially modules that are experimental and/or dependency-heavy.
Here, we present an upgraded CellProfiler-plugins repository with examples of
accessing containerized tools, improved documentation, and added
citation/reference tools to facilitate use by and contributions from the
community.
Comment: 17 pages, 2 figures, 1 table
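As a rough sketch of the containerized-tool pattern the abstract mentions, a plugin module might assemble a `docker run` invocation that mounts its input and output directories and forwards arguments to the tool inside the container. The image name, paths, and flags below are hypothetical, not taken from the repository:

```python
def build_container_command(image, input_dir, output_dir, tool_args):
    """Assemble a `docker run` command that mounts the input directory
    read-only, mounts the output directory writable, and appends the
    tool's own arguments."""
    return [
        "docker", "run", "--rm",
        "-v", f"{input_dir}:/data/in:ro",
        "-v", f"{output_dir}:/data/out",
        image,
        *tool_args,
    ]

# Hypothetical containerized segmentation tool.
cmd = build_container_command(
    "example/segmenter:1.0", "/tmp/images", "/tmp/masks",
    ["--input", "/data/in", "--output", "/data/out"],
)
```

A plugin would then hand `cmd` to `subprocess.run` and read the results back from the mounted output directory.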
Pseudo-Labeling Enhanced by Privileged Information and Its Application to In Situ Sequencing Images
Various strategies for label-scarce object detection have been explored by
the computer vision research community. These strategies mainly rely on
assumptions that are specific to natural images and are not directly
applicable to the biological and biomedical vision domains. For example, most
semi-supervised learning strategies rely on a small set of labeled data as a
confident source of ground truth. In many biological vision applications,
however, the ground truth is unknown and indirect information may be
available in the form of noisy estimates or orthogonal evidence. In this
work, we frame a crucial problem in spatial transcriptomics - decoding
barcodes from In Situ Sequencing (ISS) images - as a semi-supervised object
detection (SSOD) problem. Our proposed framework incorporates additional
available sources of information into a semi-supervised learning framework in
the form of privileged information, which is incorporated into the teacher's
pseudo-labeling in a teacher-student self-training iteration. Although the
available privileged information can be domain-specific, we introduce a
general strategy of pseudo-labeling enhanced by privileged information
(PLePI) and exemplify the concept using ISS images, as well as on the COCO
benchmark using extra evidence provided by CLIP.
Comment: This paper has been accepted for publication at IJCAI 202
Optimizing the Cell Painting assay for image-based profiling
In image-based profiling, software extracts thousands of morphological features of cells from multi-channel fluorescence microscopy images, yielding single-cell profiles that can be used for basic research and drug discovery. Powerful applications have been demonstrated, including clustering chemical and genetic perturbations based on their similar morphological impact, identifying disease phenotypes by observing differences in profiles between healthy and diseased cells, and predicting assay outcomes using machine learning, among many others. Here we provide an updated protocol for the most popular assay for image-based profiling, Cell Painting. Introduced in 2013, it uses six stains imaged in five channels and labels eight diverse components of the cell: DNA, cytoplasmic RNA, nucleoli, actin, Golgi apparatus, plasma membrane, endoplasmic reticulum, and mitochondria. The original protocol was updated in 2016 based on several years’ experience running it at two sites, after optimizing it by visual stain quality. Here we describe the work of the Joint Undertaking for Morphological Profiling (JUMP) Cell Painting Consortium, which aims to improve the assay via quantitative optimization, based on the measured ability of the assay to detect morphological phenotypes and group similar perturbations together. We find that the assay gives very robust outputs despite a variety of changes to the protocol and that two vendors’ dyes work equivalently well. We present Cell Painting version 3, in which some steps are simplified and several stain concentrations can be reduced, saving costs. Cell culture and image acquisition take 1–2 weeks for a typically sized batch of 20 or fewer plates; feature extraction and data analysis take an additional 1–2 weeks.
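The "grouping similar perturbations" application described above reduces, at its simplest, to comparing per-perturbation feature vectors with a similarity measure such as cosine similarity. The three-feature profiles below are made-up toy values, not real Cell Painting measurements:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two morphological feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical per-perturbation profiles (features averaged over cells).
profiles = {
    "compound_A": [0.9, 0.1, 0.4],
    "compound_B": [0.8, 0.2, 0.5],   # similar morphological impact to A
    "compound_C": [0.1, 0.9, 0.2],   # distinct morphological impact
}
sim_ab = cosine_similarity(profiles["compound_A"], profiles["compound_B"])
sim_ac = cosine_similarity(profiles["compound_A"], profiles["compound_C"])
```

In practice the profiles contain thousands of features and the pairwise similarities feed into clustering or nearest-neighbor retrieval, but the comparison step works the same way.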
The Multi-modality Cell Segmentation Challenge: Towards Universal Solutions
Cell segmentation is a critical step for quantitative single-cell analysis in microscopy images. Existing cell segmentation methods are often tailored to specific modalities or require manual intervention to specify hyperparameters in different experimental settings. Here, we present a multi-modality cell segmentation benchmark, comprising over 1500 labeled images derived from more than 50 diverse biological experiments. The top participants developed a Transformer-based deep-learning algorithm that not only outperforms existing methods, but can also be applied to diverse microscopy images across imaging platforms and tissue types without manual parameter adjustments. This benchmark and the improved algorithm offer promising avenues for more accurate and versatile cell analysis in microscopy imaging.
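Benchmarks like this one score predicted masks against labeled ground truth, and the basic building block of such scoring is intersection over union (IoU) between masks. The sketch below computes IoU for two toy binary masks; it illustrates the metric only, not the challenge's exact evaluation protocol:

```python
def iou(mask_a, mask_b):
    """Intersection over union of two binary masks given as nested lists."""
    inter = 0
    union = 0
    for row_a, row_b in zip(mask_a, mask_b):
        for a, b in zip(row_a, row_b):
            inter += 1 if (a and b) else 0
            union += 1 if (a or b) else 0
    return inter / union if union else 0.0

# Toy predicted and ground-truth masks: 4 pixels each, 2 overlapping,
# so intersection = 2, union = 6.
pred = [[1, 1, 0],
        [1, 1, 0],
        [0, 0, 0]]
gt   = [[0, 1, 1],
        [0, 1, 1],
        [0, 0, 0]]
score = iou(pred, gt)
```

A predicted cell is then typically counted as a match when its IoU with a ground-truth cell exceeds a threshold, and per-image detection scores are aggregated over the whole benchmark.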